Welcome to Open Reading Group's library,

  Here you can find all papers liked or uploaded by Open Reading Group,
  together with a brief user bio and a description of their academic activity.


### Upcoming readings:

No upcoming readings for now...

### Past Readings:

- 21/05/2018 [Graph Embedding Techniques, Applications, and Performance: A Survey](https://papers-gamma.link/paper/52)
- 14/05/2018 [A Fast and Provable Method for Estimating Clique Counts Using Turán’s Theorem](https://papers-gamma.link/paper/24)
- 07/05/2018 [VERSE: Versatile Graph Embeddings from Similarity Measures](https://papers-gamma.link/paper/48)
- 30/04/2018 [Hierarchical Clustering Beyond the Worst-Case](https://papers-gamma.link/paper/45)
- 16/04/2018 [Scalable Motif-aware Graph Clustering](https://papers-gamma.link/paper/18)
- 02/04/2018 [Practical Algorithms for Linear Boolean-width](https://papers-gamma.link/paper/40)
- 26/03/2018 [New Perspectives and Methods in Link Prediction](https://papers-gamma.link/paper/28/New%20Perspectives%20and%20Methods%20in%20Link%20Prediction)
- 19/03/2018 [In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond](https://papers-gamma.link/paper/37)
- 12/03/2018 [Diversity is All You Need: Learning Skills without a Reward Function](https://papers-gamma.link/paper/36)
- 05/03/2018 [When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors](https://papers-gamma.link/paper/23)
- 26/02/2018 [Fast Approximation of Centrality](https://papers-gamma.link/paper/35/Fast%20Approximation%20of%20Centrality)
- 19/02/2018 [Indexing Public-Private Graphs](https://papers-gamma.link/paper/19/Indexing%20Public-Private%20Graphs)
- 12/02/2018 [On the uniform generation of random graphs with prescribed degree sequences](https://papers-gamma.link/paper/26/On%20the%20uniform%20generation%20of%20random%20graphs%20with%20prescribed%20d%20egree%20sequences)
- 05/02/2018 [Linear Additive Markov Processes](https://papers-gamma.link/paper/21/Linear%20Additive%20Markov%20Processes)
- 29/01/2018 [ESCAPE: Efficiently Counting All 5-Vertex Subgraphs](https://papers-gamma.link/paper/17/ESCAPE:%20Efficiently%20Counting%20All%205-Vertex%20Subgraphs)
- 22/01/2018 [The k-peak Decomposition: Mapping the Global Structure of Graphs](https://papers-gamma.link/paper/16/The%20k-peak%20Decomposition:%20Mapping%20the%20Global%20Structure%20of%20Graphs)


Comments:

Hard to understand for a newbie in Deep RL. A formal definition of what a skill is ("A skill is simply a policy.") would help; a rough attempt is sketched below.

### Typos:

- "guaranteeing that is has maximum entropy"
- "We discuss the the log p(z) term in Appendix B."
- "so it much first gather momentum"
- "While are skills are learned"
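My attempt at a minimal formalization, as I understand the setup (notation mine, the paper may state it differently): a latent code z is drawn once per episode and held fixed, and the "skill" is the policy conditioned on that code; training then rewards skills whose visited states make z identifiable while keeping the policy as random as possible.

```latex
% Sketch of my reading (notation mine, not the authors'):
% a latent code z is sampled once per episode and held fixed;
% the resulting conditional policy is what the paper calls a skill.
\[
  z \sim p(z), \qquad \text{skill } z \;:=\; \pi_\theta(a \mid s, z)
\]
% The objective, as I understand it, combines discriminability of z
% from the visited states with policy entropy:
\[
  \max_\theta \; I(S; Z) \;+\; \mathcal{H}[A \mid S, Z]
\]
```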
I agree with the previous comment. The article seems to be aimed at people who are already familiar with reinforcement learning (not necessarily deep, or based on neural networks, I guess) and its usual benchmarks. The implementations are not detailed; the authors lay stress on the general idea (which is relatively simple to get) and on its visual results, which look quite spectacular.
The problem, from my point of view, is that a scientist has close to no incentive to give additional context information. For example, an author would not add a link to a Wikipedia page, as it is considered (rightly or wrongly) below scientific quality standards. Even worse: I have observed that in most scientific communities, you are expected not to give too many details on definitions, assumptions or computations that are considered "common knowledge". In other words, adding too many details is understood as a sign that you are an outsider to the community. In any case, hyperlinks would not solve everything, as the writer still has to target a given "level of knowledge" of the reader in order to make the information understandable by a specific audience. Typically, when I read about a field which is new to me, I first go to Wikipedia. The level of detail of a Wikipedia page is widely heterogeneous: some pages are too basic for what I already know, while others are too elaborate, and, as you said, I am not willing to spend the "focusing time" necessary to understand such a page if I cannot even evaluate whether I will have a much better understanding of the paper afterwards. The author is certainly more capable of evaluating what context information is useful to understand what he writes, but you are right that it costs him time as well. So I think that, in a way, the problem can be seen as an economic one, of which focusing time is the main resource.
> For example, an author would not add a link to a Wikipedia page, as it is considered (rightly or wrongly) below scientific quality standards.

I would not agree: some scientists already write blog posts about their beloved subjects using a lot of hyperlinks to Wikipedia and other sites; consider for example [Igor Pak's](https://igorpak.wordpress.com/2015/05/26/the-power-of-negative-thinking-part-i-pattern-avoidance/) or [Terence Tao's](https://terrytao.wordpress.com/2009/04/26/szemeredis-regularity-lemma-via-random-partitions/) blogs. Currently, such blog posts are considered complementary to "real" scientific publications. But one day everything will change.

Comments:

Just updated the paper info; it now contains a complete list of authors.